In this work, we propose a novel image reconstruction framework that directly learns a neural implicit representation in k-space for ECG-triggered non-Cartesian cardiac magnetic resonance imaging (CMR). While existing methods bin acquired data from neighboring time points to reconstruct one phase of the cardiac motion, our framework allows for a continuous, binning-free, and subject-specific k-space representation. We assign a unique coordinate, consisting of time, coil index, and frequency-domain location, to each sampled k-space point. We then learn the subject-specific mapping from these coordinates to k-space intensities using a multi-layer perceptron with frequency-domain regularization. During inference, we obtain a complete k-space on Cartesian coordinates at an arbitrary temporal resolution; a simple inverse Fourier transform then recovers the image, eliminating the need for density compensation and costly non-uniform Fourier transforms for non-Cartesian data. The framework was tested on 42 radially sampled datasets from 6 subjects. The proposed method outperforms other techniques qualitatively and quantitatively, using data from four heartbeats and from a single heartbeat at 30 cardiac phases. Our single-heartbeat reconstructions of 50 cardiac phases show improved artifact removal and spatio-temporal resolution, demonstrating the potential for real-time CMR.
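The inference step described above can be sketched as follows. The MLP here is a hypothetical, untrained stand-in for the learned subject-specific mapping (its size and inputs are assumptions, not the paper's architecture); the point is the pipeline: query the network on a full Cartesian grid at a chosen time and coil, then apply a plain inverse FFT instead of a NUFFT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, untrained 2-layer MLP standing in for the learned
# subject-specific mapping (t, coil, kx, ky) -> complex k-space value.
W1 = rng.standard_normal((4, 64)) * 0.1
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 2)) * 0.1

def kspace_mlp(coords):
    h = np.tanh(coords @ W1 + b1)
    out = h @ W2                       # [:, 0] real part, [:, 1] imag part
    return out[:, 0] + 1j * out[:, 1]

# Inference: query a full Cartesian grid at one cardiac phase and coil ...
N = 32
kx, ky = np.meshgrid(np.linspace(-0.5, 0.5, N), np.linspace(-0.5, 0.5, N))
t, coil = 0.25, 0.0
coords = np.stack([np.full(N * N, t), np.full(N * N, coil),
                   kx.ravel(), ky.ravel()], axis=1)
kspace = kspace_mlp(coords).reshape(N, N)

# ... and a simple inverse Fourier transform recovers the image
# (no density compensation, no non-uniform FFT needed).
image = np.fft.ifft2(np.fft.ifftshift(kspace))
print(image.shape)
```

Because the representation is continuous in time, the same grid query can be repeated at any temporal resolution, which is what enables the binning-free reconstruction.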
A key barrier to using reinforcement learning (RL) in many real-world applications is the requirement of a large number of system interactions to learn a good control policy. Off-policy and offline RL methods have been proposed to reduce the number of interactions with the physical environment by learning control policies from historical data. However, their performance suffers from the lack of exploration and from distributional shifts in trajectories once controllers are updated. Moreover, most RL methods require that all states are directly observed, which is difficult to attain in many settings. To overcome these challenges, we propose a trajectory generation algorithm, which adaptively generates new trajectories as if the system were being operated and explored under the updated control policies. Motivated by the fundamental lemma for linear systems, assuming sufficient excitation, we generate trajectories from linear combinations of historical trajectories. For linear feedback control, we prove that the algorithm generates trajectories with exactly the same distribution as if they were sampled from the real system using the updated control policy. In particular, the algorithm extends to systems where the states are not directly observed. Experiments show that the proposed method significantly reduces the number of sampled data needed for RL algorithms.
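The core idea from the fundamental lemma can be illustrated with a minimal sketch (a toy scalar system, not the paper's algorithm): trajectories of a linear system form a subspace, so any linear combination of windows (Hankel-matrix columns) of one sufficiently exciting historical trajectory is itself a valid system trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.9, 0.5          # toy scalar LTI system: x+ = a*x + b*u, y = x
T, L = 60, 8             # length of historical data, window length

# Collect one persistently exciting historical trajectory.
u = rng.standard_normal(T)
y = np.zeros(T)
for k in range(T - 1):
    y[k + 1] = a * y[k] + b * u[k]

def hankel(w, L):
    # columns are length-L sliding windows of the signal
    return np.column_stack([w[i:i + L] for i in range(len(w) - L + 1)])

Hu, Hy = hankel(u, L), hankel(y, L)

# Any linear combination of Hankel columns is itself a system trajectory.
g = rng.standard_normal(Hu.shape[1])
u_gen, y_gen = Hu @ g, Hy @ g

# Verify the generated input/output pair obeys the same dynamics.
residual = y_gen[1:] - (a * y_gen[:-1] + b * u_gen[:-1])
print(np.max(np.abs(residual)))
```

By choosing the combination weights to match the closed-loop behavior of an updated controller, new trajectories can be synthesized from old data without touching the physical system, which is the mechanism the abstract describes.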
Recent studies show that deep neural network-based recommender systems are vulnerable to adversarial attacks, where attackers can inject carefully crafted fake user profiles (i.e., sets of items that fake users have interacted with) into a target recommender system to achieve malicious purposes, such as promoting or demoting a set of target items. Due to security and privacy concerns, it is more practical to perform adversarial attacks under the black-box setting, where the architecture/parameters and training data of the target system cannot be easily accessed by attackers. However, generating high-quality fake user profiles under the black-box setting is challenging given only limited resources of the target system. To address this challenge, in this work we introduce a novel strategy that leverages items' attribute information (i.e., the item knowledge graph), which is publicly accessible and provides rich auxiliary knowledge to enhance the generation of fake user profiles. More specifically, we propose a knowledge-graph-enhanced black-box attack framework (KGAttack) that effectively learns attack policies through deep reinforcement learning, in which the knowledge graph is seamlessly integrated into a hierarchical policy network to generate fake user profiles for performing adversarial black-box attacks. Comprehensive experiments on various real-world datasets demonstrate the effectiveness of the proposed attack framework under the black-box setting.
This paper proposes a one-step two-critic deep reinforcement learning (OSTC-DRL) approach for inverter-based Volt-VAR control (IB-VVC). First, since IB-VVC can be formulated as a single-period optimization problem, we cast IB-VVC as a one-step Markov decision process rather than a standard Markov decision process, which simplifies the DRL learning task. We then design a one-step actor-critic DRL scheme, a simplified version of recent DRL algorithms that successfully avoids the problem of Q-value overestimation. Furthermore, considering the two objectives of VVC, minimizing power loss and eliminating voltage violations, we use two critics to separately approximate the returns of the two objectives. This simplifies each critic's approximation task and avoids interaction effects between the two objectives during critic learning. The OSTC-DRL approach integrates the one-step actor-critic scheme and the two-critic technique. Based on OSTC-DRL, we design two centralized DRL algorithms. In addition, we extend OSTC-DRL to multi-agent OSTC-DRL for decentralized IB-VVC and design two multi-agent DRL algorithms. Simulations show that the proposed OSTC-DRL achieves a faster convergence rate and better control performance, and that multi-agent OSTC-DRL is suitable for decentralized IB-VVC problems.
Recently, model-driven deep learning has unrolled certain iterative algorithms of regularization models into cascaded networks by replacing the first-order information of the regularizer (i.e., its (sub)gradient or proximal operator) with a network module, which is more explainable and predictable than common data-driven networks. Conversely, in theory there does not necessarily exist a functional regularizer whose first-order information matches the substituted network module, which means the network output may not be covered by the original regularization model. Moreover, to date there is no guarantee of the global convergence and robustness (regularity) of unrolled networks under realistic assumptions. To bridge this gap, this paper proposes a safeguarded methodology for network unrolling. Specifically, focusing on accelerated MRI, we unroll a zeroth-order algorithm in which the network module represents the regularizer itself, so that the network output is still covered by the regularization model. Furthermore, inspired by deep equilibrium models, before backpropagation we run the unrolled iterative network to convergence at a fixed point to ensure convergence. When the measurement data contain noise, we prove that the proposed network is robust to noisy interference. Finally, numerical experiments show that the proposed network consistently outperforms state-of-the-art MRI reconstruction methods, including traditional regularization methods and other deep learning methods.
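The deep-equilibrium idea of running the unrolled iteration to a fixed point can be sketched on a toy linear inverse problem (the learned regularizer is omitted here, so this is only the convergence mechanism, not the paper's network): iterate a contractive update until it stops changing, then treat the fixed point as the reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16

# Toy well-conditioned linear "measurement" problem A x = y; one gradient
# step on ||A x - y||^2 stands in for one unrolled iteration (a learned
# regularizer module would act inside `iteration` in the real method).
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
y = A @ x_true
eta = 0.2 / np.linalg.norm(A, 2) ** 2   # step size keeping the map contractive

def iteration(x):
    return x - eta * A.T @ (A @ x - y)

# Run the iteration to (numerical) convergence, deep-equilibrium style.
x = np.zeros(n)
for _ in range(5000):
    x_next = iteration(x)
    if np.linalg.norm(x_next - x) < 1e-10:
        break
    x = x_next

print(np.linalg.norm(A @ x - y))   # residual at the fixed point
```

In the deep-equilibrium setting, gradients are then taken at the fixed point via the implicit function theorem rather than by backpropagating through every iteration, which is what makes the forward pass to convergence affordable.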
Deep learning-based methods have achieved significant performance for image defogging. However, existing methods are mainly developed for land scenes and perform poorly when dealing with overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes. To promote the recovery of objects on the water, two loss functions are exploited for the network, where a prior map is designed to invert the dark channel and min-max normalization is used to suppress the sky and emphasize objects. However, due to the unpaired training set, the network may learn an under-constrained domain mapping from foggy to fog-free images, leading to artifacts and loss of details. Thus, we propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive qualitative and quantitative comparisons demonstrate that the proposed method outperforms state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
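The prior-map construction described above (invert the dark channel, then min-max normalize) can be sketched as follows. This is a hypothetical reading of the design, not the paper's exact implementation; the patch size and the epsilon in the normalization are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_map(img, patch=7):
    """Sketch: invert the dark channel and min-max normalize it, so dark
    objects on the water get high weight while bright sky/water regions
    (whose dark channel is large under fog) are suppressed."""
    h, w, _ = img.shape
    dark = img.min(axis=2)                    # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(dark, pad, mode='edge')
    # local-patch minimum gives the dark channel
    dc = np.stack([np.stack([padded[i:i + patch, j:j + patch].min()
                             for j in range(w)]) for i in range(h)])
    inv = 1.0 - dc                            # invert: emphasize objects
    return (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)

img = rng.uniform(size=(32, 32, 3))           # stand-in RGB image in [0, 1]
p = prior_map(img)
print(p.shape, float(p.min()), float(p.max()))
```

A map like this can then weight a reconstruction loss so that the network spends its capacity on the objects rather than on the large homogeneous sky and water regions.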
Automatic image colorization is a particularly challenging problem. Due to the severe ill-posedness of the problem and multi-modal uncertainty, directly training a deep neural network usually leads to incorrect semantic colors and low color richness. Existing transformer-based methods can deliver better results but depend heavily on hand-crafted dataset-level empirical distribution priors. In this work, we propose DDColor, a new end-to-end method with dual decoders for image colorization. More specifically, we design a multi-scale image decoder and a transformer-based color decoder. The former restores the spatial resolution of the image, while the latter establishes the correlation between semantic representations and color queries via cross-attention. The two decoders work together to learn semantic-aware color embeddings by leveraging multi-scale visual features. With the help of these two decoders, our method succeeds in producing semantically consistent and visually plausible colorization results without any additional priors. In addition, a simple but effective colorfulness loss is introduced to further improve the color richness of generated results. Our extensive experiments demonstrate that the proposed DDColor achieves significantly superior performance to existing state-of-the-art works both quantitatively and qualitatively. Codes will be made publicly available.
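The cross-attention between color queries and visual features can be illustrated with a generic scaled dot-product sketch (the shapes and the single-head form are assumptions for illustration, not DDColor's exact layer):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(queries, keys, values):
    """Generic scaled dot-product cross-attention: each query forms a
    softmax-weighted combination of the value vectors."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

# Hypothetical shapes: 100 learnable color queries attend over the
# flattened visual feature tokens of a 64x64 feature map.
color_queries = rng.standard_normal((100, 32))
visual_feats = rng.standard_normal((4096, 32))
out = cross_attention(color_queries, visual_feats, visual_feats)
print(out.shape)
```

Each output row is a color embedding grounded in the semantic content the query attended to, which is how the color decoder ties colors to semantics without dataset-level priors.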
Persuasion modeling is a key building block for conversational agents. Existing works in this direction are limited to analyzing textual dialogue corpora. We argue that visual signals also play an important role in understanding human persuasive behaviors. In this paper, we introduce the first multimodal dataset for modeling persuasion behaviors. Our dataset includes 199 dialogue transcriptions and videos captured in a multi-player social deduction game setting, 26,647 utterance-level annotations of persuasion strategy, and game-level annotations of deduction game outcomes. We provide extensive experiments to show how dialogue context and visual signals benefit persuasion strategy prediction. We also explore the generalization ability of language models for persuasion modeling and the role of persuasion strategies in predicting social deduction game outcomes. Our dataset, code, and models can be found at https://persuasion-deductiongame.socialai-data.org.
Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, which replaces the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, Fourier-Net learns its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) into the dense, full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22$\%$ of its parameters and 6.66$\%$ of the mult-adds, achieves a 0.6$\%$ higher Dice score and an 11.48$\times$ faster inference speed. Code is available at \url{https://github.com/xi-jia/Fourier-Net}.
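The parameter-free decoder (zero padding plus inverse DFT) can be sketched in 2D as follows. For a band-limited field, keeping only the central low-frequency block and zero-padding it back reconstructs the dense field exactly; the toy field and the 16x16 band here are illustrative assumptions.

```python
import numpy as np

def fourier_decode(low_band, full_shape):
    """Model-driven decoder sketch: zero-pad a centred low-frequency block
    to the full spectrum size, then inverse-FFT to the dense spatial field."""
    H, W = full_shape
    h, w = low_band.shape
    padded = np.zeros(full_shape, dtype=complex)
    padded[(H - h) // 2:(H + h) // 2, (W - w) // 2:(W + w) // 2] = low_band
    return np.fft.ifft2(np.fft.ifftshift(padded)).real

# A smooth (band-limited) toy displacement component ...
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
field = np.sin(x)[:, None] + np.cos(x)[None, :]

# ... its centred spectrum, a 16x16 low-frequency band, and the decode.
spec = np.fft.fftshift(np.fft.fft2(field))
low = spec[24:40, 24:40]
recon = fourier_decode(low, (64, 64))
print(np.max(np.abs(recon - field)))   # tiny: the field is band-limited
```

Because the decoder has no learnable parameters, the network only has to predict the small low-frequency block, which is where the parameter and mult-add savings come from.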
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.